Can we leverage the audiovisual information already present in video to improve self-supervised representation learning? To answer this question, we study various pretraining architectures and objectives within the masked autoencoding framework, motivated by the success of similar methods in natural language and image understanding. We show that we can achieve significant improvements on audiovisual downstream classification tasks, surpassing the state-of-the-art on VGGSound and AudioSet. Furthermore, we can leverage our audiovisual pretraining scheme for multiple unimodal downstream tasks using a single audiovisual pretrained model. We additionally demonstrate the transferability of our representations, achieving state-of-the-art audiovisual results on Epic Kitchens without pretraining specifically for this dataset.
Medical image segmentation is an actively studied task in medical imaging, where the precision of the annotations is of utter importance towards accurate diagnosis and treatment. In recent years, the task has been approached with various deep learning systems, among the most popular models being U-Net. In this work, we propose a novel strategy to generate ensembles of different architectures for medical image segmentation, by leveraging the diversity (decorrelation) of the models forming the ensemble. More specifically, we utilize the Dice score among model pairs to estimate the correlation between the outputs of the two models forming each pair. To promote diversity, we select models with low Dice scores among each other. We carry out gastro-intestinal tract image segmentation experiments to compare our diversity-promoting ensemble (DiPE) with another strategy to create ensembles based on selecting the top scoring U-Net models. Our empirical results show that DiPE surpasses both individual models as well as the ensemble creation strategy based on selecting the top scoring models.
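The selection rule described above (pairwise Dice as a correlation proxy, then greedily keeping models with low Dice among each other) can be sketched as follows. This is a minimal illustration of the stated idea, not the authors' implementation; the function names and the greedy seeding choice are assumptions.

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice similarity between two binary segmentation masks."""
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def select_diverse(preds, k):
    """Greedily pick k models whose outputs have low pairwise Dice.

    preds: list of binary masks (one per candidate model) on a held-out set.
    """
    n = len(preds)
    # pairwise Dice matrix, used as a proxy for output correlation
    D = np.array([[dice(preds[i], preds[j]) for j in range(n)]
                  for i in range(n)])
    chosen = [0]  # seed with one model (e.g. the best individual scorer)
    while len(chosen) < k:
        rest = [i for i in range(n) if i not in chosen]
        # add the model least correlated (lowest mean Dice) with those chosen
        nxt = min(rest, key=lambda i: D[i, chosen].mean())
        chosen.append(nxt)
    return chosen
```

In practice, the selected indices would then be combined by averaging or voting over the corresponding models' predictions.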
The self-supervised multi-task learning (SSMTL) framework for video anomaly detection was recently introduced in the literature. Owing to its accurate results, the method attracted the attention of many researchers. In this work, we revisit the self-supervised multi-task learning framework, proposing several updates to the original method. First, we study various detection methods, e.g. based on detecting high-motion regions using optical flow or background subtraction, since we believe the currently used pre-trained YOLOv3 is suboptimal, e.g. objects in motion or objects from unknown classes are never detected. Second, we modernize the 3D convolutional backbone by introducing multi-head self-attention modules, inspired by the recent success of vision transformers. As such, we alternatively introduce both 2D and 3D convolutional vision transformer (CvT) blocks. Third, in our attempt to further improve the model, we study additional self-supervised learning tasks, such as predicting segmentation maps through knowledge distillation, solving jigsaw puzzles, estimating body pose through knowledge distillation, predicting masked regions (inpainting), and adversarial learning with pseudo-anomalies. We conduct experiments to assess the performance impact of the introduced changes. Upon finding the more promising configurations of the framework, dubbed SSMTL++v1 and SSMTL++v2, we extend our preliminary experiments to more data sets, showing that our performance gains are consistent across all data sets. In most cases, our results on Avenue, ShanghaiTech and UBnormal raise the state-of-the-art performance to a new level.
Recent studies have shown that convolutional neural networks do not generalize well to small image transformations, e.g. rotations by a few degrees or translations of a few pixels. To improve the robustness to such transformations, we propose to introduce data augmentation at intermediate layers of the neural architecture, in addition to the common data augmentation applied on the input images. By introducing small perturbations to activation maps (features) at various levels, we develop the capacity of the neural network to cope with such transformations. We conduct experiments on three image classification benchmarks (Tiny ImageNet, Caltech-256 and Food-101), considering two different convolutional architectures (ResNet-18 and DenseNet-121). When compared with two state-of-the-art stabilization methods, the empirical results show that our approach consistently attains the best trade-off between accuracy and mean flip rate.
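As a toy illustration of perturbing intermediate activation maps, the sketch below applies a small random spatial shift to a feature tensor. The function name and the use of a circular shift are assumptions for illustration; the paper's actual perturbations may differ.

```python
import numpy as np

def jitter_features(fmap, max_shift=1, rng=None):
    """Apply a small random spatial shift to an activation map.

    fmap: array of shape (C, H, W), an illustrative stand-in for an
    intermediate feature map inside a CNN forward pass.
    """
    rng = rng or np.random.default_rng()
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    # a circular shift keeps the tensor shape unchanged, so the
    # downstream layers are unaffected by the augmentation itself
    return np.roll(fmap, shift=(int(dy), int(dx)), axis=(1, 2))
```

During training, such a perturbation would be applied stochastically between layers, analogously to how input-level augmentation is applied to images.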
We study a series of recognition tasks in two realistic scenarios requiring the analysis of faces under strong occlusion. On the one hand, we aim to recognize facial expressions of people wearing virtual reality (VR) headsets. On the other hand, we aim to estimate the age and identify the gender of people wearing surgical masks. For all these tasks, the common ground is that half of the face is occluded. In this challenging setting, we show that convolutional neural networks (CNNs) trained on fully-visible faces exhibit very low performance levels. While fine-tuning the deep learning models on occluded faces is extremely useful, we show that additional performance gains can be obtained by distilling knowledge from models trained on fully-visible faces. To this end, we study two knowledge distillation methods, one based on teacher-student training and one based on triplet loss. Our main contribution consists in a novel approach for knowledge distillation based on triplet loss, which generalizes across models and tasks. Furthermore, we consider combining distilled models learned through conventional teacher-student training or through our novel teacher-student training based on triplet loss. We provide empirical evidence showing that, in most cases, both individual and combined knowledge distillation methods bring statistically significant performance improvements. We conduct experiments with three different neural models (VGG-f, VGG-face, ResNet-50) on various tasks (facial expression recognition, gender recognition, age estimation), showing consistent improvements regardless of the model or task.
Detecting abnormal events in video is commonly framed as a one-class classification task, where training videos contain only normal events, while test videos encompass both normal and abnormal events. In this scenario, anomaly detection is an open-set problem. However, some studies assimilate anomaly detection to action recognition. This is a closed-set scenario that fails to test the capability of systems at detecting new anomaly types. To this end, we propose UBnormal, a new supervised open-set benchmark composed of multiple virtual scenes for video anomaly detection. Unlike existing data sets, we introduce abnormal events annotated at the pixel level at training time, for the first time enabling the use of fully-supervised learning methods for abnormal event detection. To preserve the typical open-set formulation, we make sure to include disjoint sets of anomaly types in our training and test collections of videos. To our knowledge, UBnormal is the first video anomaly detection benchmark to allow a fair head-to-head comparison between one-class open-set models and supervised closed-set models, as shown in our experiments. Moreover, we provide empirical evidence showing that UBnormal can enhance the performance of a state-of-the-art anomaly detection framework on two prominent data sets, Avenue and ShanghaiTech.
In order to classify linearly non-separable data, neurons are typically organized into multi-layer neural networks that are equipped with at least one hidden layer. Inspired by some recent discoveries in neuroscience, we propose a new neuron model along with a novel activation function enabling the learning of nonlinear decision boundaries using a single neuron. We show that a standard neuron followed by the novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy. Furthermore, we conduct experiments on five benchmark data sets from computer vision, signal processing and natural language processing, i.e. MOROCO, UTKFace, CREMA-D, Fashion-MNIST and Tiny ImageNet, showing that the ADA and leaky ADA functions provide superior results to Rectified Linear Units (ReLU), leaky ReLU, RBF and Swish, for various neural network architectures, e.g. one-hidden-layer or two-hidden-layer multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs) such as LeNet, VGG, ResNet and Character-level CNN. We obtain further performance improvements when we change the standard model of the neuron with our pyramidal neuron with apical dendrite activations (PyNADA). Our code is available at: https://github.com/raduionescu/pynada.
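To see why a single neuron with a non-monotonic activation can separate XOR, consider the toy sketch below. It uses a Gaussian bump as an illustrative non-monotonic activation; this is not the paper's ADA formula (which is defined in the paper and in the linked repository), just a minimal demonstration of the underlying principle.

```python
import math

def xor_neuron(x1, x2):
    """Single 'neuron': a linear projection followed by a Gaussian bump.

    The bump is high only near z = 0, i.e. when exactly one input is on,
    so one unit can separate XOR without any hidden layer.
    """
    z = x1 + x2 - 1.0          # fixed weights (1, 1) and bias -1
    a = math.exp(-z * z)       # non-monotonic activation
    return int(a > 0.5)
```

With a monotonic activation such as ReLU, no single neuron can produce the required non-linearly-separable decision boundary, which is exactly the limitation the ADA function is designed to overcome.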
Predicting the presence of major depressive disorder (MDD) using behavioural and cognitive signals is a highly non-trivial task. The heterogeneous clinical profile of MDD means that any given speech, facial expression and/or observed cognitive pattern may be associated with a unique combination of depressive symptoms. Conventional discriminative machine learning models potentially lack the complexity to robustly model this heterogeneity. Bayesian networks, however, may instead be well-suited to such a scenario. These networks are probabilistic graphical models that efficiently describe the joint probability distribution over a set of random variables by explicitly capturing their conditional dependencies. This framework provides further advantages over standard discriminative modelling by offering the possibility to incorporate expert opinion in the graphical structure of the models, generating explainable model predictions, informing about the uncertainty of predictions, and naturally handling missing data. In this study, we apply a Bayesian framework to capture the relationships between depression, depression symptoms, and features derived from speech, facial expression and cognitive game data collected at thymia.
Modern AAA video games feature huge game levels and maps which are increasingly hard for level testers to cover exhaustively. As a result, games often ship with catastrophic bugs such as the player falling through the floor or getting stuck in walls. We propose an approach specifically targeted at reachability bugs in simulated 3D environments based on the powerful exploration algorithm Go-Explore, which saves unique checkpoints across the map and then identifies promising ones to explore from. We show that when coupled with simple heuristics derived from the game's navigation mesh, Go-Explore finds challenging bugs and comprehensively explores complex environments without the need for human demonstrations or knowledge of the game dynamics. Exploration vastly outperforms more complicated baselines, including reinforcement learning with intrinsic curiosity, in both covering the navigation mesh and the number of unique positions discovered across the map. Finally, due to our use of parallel agents, our algorithm can fully cover a vast 1.5km x 1.5km game world within 10 hours, which is very promising for continuous testing suites.
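The checkpoint-saving loop at the heart of Go-Explore can be sketched as a simple archive keyed by discretized map cells: store the first checkpoint reaching each cell, then return to rarely visited cells to explore further. This is a hedged minimal sketch of that idea; the class name, cell discretization, and selection heuristic are assumptions, not the authors' implementation.

```python
import random

class ExploreArchive:
    """Minimal Go-Explore-style archive: one checkpoint per map cell."""

    def __init__(self, cell_size=10.0):
        self.cell_size = cell_size
        self.cells = {}  # cell coordinates -> [checkpoint, visit_count]

    def cell_of(self, pos):
        x, y, z = pos
        return (int(x // self.cell_size), int(y // self.cell_size),
                int(z // self.cell_size))

    def add(self, pos, checkpoint):
        cell = self.cell_of(pos)
        if cell not in self.cells:       # keep only the first checkpoint per cell
            self.cells[cell] = [checkpoint, 0]

    def select(self, rng=random):
        # favour rarely visited cells; a simple stand-in for the
        # navigation-mesh heuristic described in the abstract
        cell = min(self.cells, key=lambda c: (self.cells[c][1], rng.random()))
        self.cells[cell][1] += 1
        return self.cells[cell][0]
```

An agent would repeatedly call `select()` to restore a promising checkpoint, take random or heuristic actions, and call `add()` for every new position reached, flagging cells that the navigation mesh says should be unreachable.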
Randomly masking and predicting word tokens has been a successful approach in pre-training language models for a variety of downstream tasks. In this work, we observe that the same idea also applies naturally to sequential decision making, where many well-studied tasks like behavior cloning, offline RL, inverse dynamics, and waypoint conditioning correspond to different sequence maskings over a sequence of states, actions, and returns. We introduce the FlexiBiT framework, which provides a unified way to specify models which can be trained on many different sequential decision making tasks. We show that a single FlexiBiT model is simultaneously capable of carrying out many tasks with performance similar to or better than specialized models. Additionally, we show that performance can be further improved by fine-tuning our general model on specific tasks of interest.
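The correspondence between tasks and sequence maskings can be made concrete with a small sketch: each task is just a different boolean mask over the (state, action) tokens of a trajectory. The function and task names below are illustrative, not the paper's API.

```python
import numpy as np

def make_mask(T, task, rng=None):
    """Build a boolean mask over a trajectory of T (state, action) pairs.

    True = token is hidden and must be predicted by the model. Different
    maskings recover different sequential decision making tasks.
    """
    mask = np.zeros((T, 2), dtype=bool)  # columns: [state, action]
    if task == "behavior_cloning":
        mask[:, 1] = True                # predict every action from states
    elif task == "inverse_dynamics":
        mask[:-1, 1] = True              # predict actions between observed states
    elif task == "forward_dynamics":
        mask[1:, 0] = True               # predict future states from the past
    elif task == "random":
        rng = rng or np.random.default_rng()
        mask = rng.random((T, 2)) < 0.5  # BERT-style random masking for pre-training
    return mask
```

Training a single bidirectional model under randomly sampled maskings, and evaluating it under the task-specific maskings, is what lets one model serve many downstream tasks at once.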